1.
Acad Med ; 2024 Mar 28.
Article En | MEDLINE | ID: mdl-38551950

PURPOSE: This study examined whether the order of podcast content influenced knowledge acquisition and retention among emergency medicine (EM) resident physicians. METHOD: This preplanned secondary analysis of 2 large, multicenter trials included a randomized, crossover trial conducted from November 2019 to June 2020 of 100 residents that compared a driving condition with a seated condition for two 30-minute podcasts and a randomized, crossover trial conducted from September 2022 to January 2023 of 95 EM residents that compared an exercise condition with a seated condition for the same two 30-minute podcasts. Each podcast contained 6 journal article reviews, with the segments recorded in forward or reverse order. After completing each podcast, participants completed an initial 20-question test and, later, a 40-question delayed recall test with separate questions. Segments were divided into 3 subgroups based on the order in which they were played (primacy group, recency group, and reference group) for assessment of primacy and recency effects. The mean scaled scores from the primacy and recency groups were compared with scores from the reference group. RESULTS: The study included 195 residents (390 podcasts), with 100 residents listening in the forward order and 95 in the reverse order. No statistically significant difference was found in immediate recall scores between the primacy and reference groups (d = 0.094; 95% CI, -0.046 to 0.234) or the recency and reference groups (d = -0.041; 95% CI, -0.181 to 0.099), or in 30-day delayed recall scores between the primacy and reference groups (d = -0.088; 95% CI, -0.232 to 0.056) or the recency and reference groups (d = -0.083; 95% CI, -0.227 to 0.060). CONCLUSIONS: The order of podcast information did not significantly affect immediate knowledge acquisition or delayed knowledge retention. This finding can inform podcast creators and listeners regarding the order of content when using podcasts for learning.
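The abstract reports comparisons as standardized mean differences (Cohen's d) with 95% CIs but does not describe the computation. Below is a minimal sketch of how such an effect size and interval can be derived from two groups of scaled scores, assuming the standard pooled-SD formula and a normal-approximation CI; the data are synthetic, and the authors' exact method is not stated in the abstract:

```python
import numpy as np

def cohens_d_with_ci(group_a, group_b, z=1.96):
    """Cohen's d (pooled SD) with a normal-approximation 95% CI."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    # Approximate standard error of d (Hedges & Olkin)
    se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    return d, (d - z * se, d + z * se)

# Synthetic scaled scores for hypothetical "primacy" and "reference" segments
rng = np.random.default_rng(0)
primacy = rng.normal(0.75, 0.15, 195)
reference = rng.normal(0.74, 0.15, 195)
d, (lo, hi) = cohens_d_with_ci(primacy, reference)
print(f"d = {d:.3f}; 95% CI, {lo:.3f} to {hi:.3f}")
```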

3.
Acad Med ; 99(2): 139-145, 2024 Feb 01.
Article En | MEDLINE | ID: mdl-37406284

ABSTRACT: Meaningful improvements to graduate medical education (GME) have been achieved in recent decades, yet many GME improvement pilots have been small trials without rigorous outcome measures and with limited generalizability. Thus, lack of access to large-scale data is a key barrier to generating empiric evidence to improve GME. In this article, the authors examine the potential of a national GME data infrastructure to improve GME, review the output of 2 national workshops on this topic, and propose a path toward achieving this goal. The authors envision a future where medical education is shaped by evidence from rigorous research powered by comprehensive, multi-institutional data. To achieve this goal, premedical education, undergraduate medical education, GME, and practicing physician data must be collected using a common data dictionary and standards and longitudinally linked using unique individual identifiers. The envisioned data infrastructure could provide a foundation for evidence-based decisions across all aspects of GME and help optimize the education of individual residents. Two workshops hosted by the National Academies of Sciences, Engineering, and Medicine Board on Health Care Services explored the prospect of better using GME data to improve education and its outcomes. There was broad consensus about the potential value of a longitudinal data infrastructure to improve GME. Significant obstacles were also noted. Suggested next steps outlined by the authors include producing a more complete inventory of data already being collected and managed by key medical education leadership organizations, pursuing a grass-roots data sharing pilot among GME-sponsoring institutions, and formulating the technical and governance frameworks needed to aggregate data across organizations. The power and potential of big data is evident across many disciplines, and the authors believe that harnessing the power of big data in GME is the best next step toward advancing evidence-based physician education.


Education, Medical; Internship and Residency; Medicine; Humans; Data Aggregation; Education, Medical, Graduate; Educational Status
4.
Acad Med ; 99(5): 575-581, 2024 May 01.
Article En | MEDLINE | ID: mdl-38109353

PURPOSE: Podcasts are commonly used by residents as part of their learning, with many listening concomitantly with other activities (e.g., driving and exercise). The effects of exercise on learning are controversial, with some studies suggesting potential benefit and others suggesting impaired learning. This study examined whether exercise influences knowledge acquisition and retention among resident physicians listening to a podcast while exercising versus those listening undistracted. METHOD: This multicenter, randomized, crossover trial assessed emergency medicine residents across 5 U.S. institutions from September 2022 to January 2023. Residents were randomized to a group that listened to one 30-minute podcast while seated or a group that listened to a 30-minute podcast while engaging in 30 minutes of continuous aerobic exercise, with stratification by site and postgraduate year. Within 30 minutes of completing the podcast, they completed a 20-question multiple-choice test. They subsequently crossed over to the other intervention and listened to a different 30-minute podcast followed by another 20-question test. Each podcast focused on emergency medicine-relevant journal articles that had not been covered in journal club or the curriculum at any site. Residents also completed a 40-question delayed recall test with separate questions on both podcasts at 30 days. RESULTS: Ninety-six residents were recruited for the study, with 95 (99.0%) completing the initial recall portion and 92 (97.0%) completing the delayed recall tests. No statistically significant differences were found between the exercise and seated cohorts on initial recall (74.4% vs 76.3%; d = -0.12; 95% CI, -0.33 to 0.08; P = .12) or delayed recall (52.3% vs 52.5%; d = -0.01; 95% CI, -0.22 to 0.19; P = .46). CONCLUSIONS: Exercising while listening to podcasts did not appear to meaningfully affect knowledge acquisition or retention at 30 days when compared with listening while seated and undistracted.


Cross-Over Studies; Emergency Medicine; Exercise; Internship and Residency; Webcasts as Topic; Humans; Internship and Residency/methods; Exercise/psychology; Emergency Medicine/education; Female; Male; United States; Retention, Psychology; Adult; Educational Measurement/methods
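Because each resident in the crossover trial above served as their own control, the natural unit of analysis is the within-subject score difference. The following is a hedged sketch of a paired comparison on synthetic data; the authors' actual statistical model is not specified in the abstract:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 95  # residents completing both initial recall tests

# Synthetic percent scores: the same resident tested after exercise
# listening and after seated listening
seated = rng.normal(76.3, 12.0, n)
exercise = seated + rng.normal(-1.9, 8.0, n)  # small within-subject shift

# Paired t-test on the within-subject differences
t_stat, p_value = stats.ttest_rel(exercise, seated)
diff = exercise - seated
d_z = diff.mean() / diff.std(ddof=1)  # paired effect size
print(f"mean difference = {diff.mean():.1f} points; d_z = {d_z:.2f}; P = {p_value:.2f}")
```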
5.
Perspect Med Educ ; 12(1): 25-40, 2023.
Article En | MEDLINE | ID: mdl-36908747

Background: In medical education, there is a growing global demand for Open Educational Resources (OERs). However, OER creators are challenged by a lack of uniform standards. In this guideline, the authors curated the literature on how to produce OERs for medical education, with practical guidance on the Do's, Don'ts, and Don't Knows for OER creation, in order to improve the impact and quality of OERs in medical education. Methods: We conducted a rapid literature review by searching the OVID MEDLINE, EMBASE, and Cochrane Central databases using the keywords "open educational resources" and "OER". The search was supplemented by hand-searching the identified articles' references. We organized included articles by theme and extracted relevant content. Lastly, we developed recommendations via an iterative process of peer review and discussion: evidence-based best practices were designated Do's and Don'ts, while gaps were designated Don't Knows. We used a consensus process to quantify evidentiary strength. Results: The authors performed full-text analysis of 81 eligible studies. A total of 15 Do's, Don'ts, and Don't Knows guidelines were compiled and presented alongside relevant evidence about OERs. Discussion: OERs can add value for medical educators and their learners, both as tools for expanding teaching opportunities and for promoting medical education scholarship. This summary should guide OER creators in producing high-quality resources and pursuing future research where best practices are lacking.


Education, Medical; Humans
6.
AEM Educ Train ; 7(1): e10842, 2023 Feb.
Article En | MEDLINE | ID: mdl-36777102

Background: Feedback and assessment are difficult to provide in the emergency department (ED) setting despite their critical importance for competency-based education, and traditional end-of-shift evaluations (ESEs) alone may be inadequate. The SIMPL (Society for Improving Medical Professional Learning) mobile application has been successfully implemented and studied in the operative setting for surgical training programs as a point-of-care tool that incorporates three assessment scales in addition to dictated feedback. SIMPL may represent a viable tool for enhancing workplace-based feedback and assessment in emergency medicine (EM). Methods: We implemented SIMPL at a 4-year EM residency program during a pilot study from March to June 2021 for observable activities such as medical resuscitations and related procedures. Faculty and residents underwent formal rater training prior to launch and were asked to complete surveys regarding the SIMPL app's content, usability, and future directions at the end of the pilot. Results: A total of 36/58 (62%) faculty members completed at least one evaluation, for a total of 190 evaluations and an average of three evaluations per faculty member. Faculty initiated 130/190 (68%) evaluations and residents initiated 60/190 (32%). Ninety-one percent of evaluations included dictated feedback. A total of 45/54 (83%) residents received at least one evaluation, with an average of 3.5 evaluations per resident. Residents generally agreed that SIMPL increased the quality of feedback received and reported that they valued the dictated feedback. Residents generally did not value the numerical feedback provided by SIMPL. Relative to the residents, faculty overall responded more positively toward SIMPL. The pilot generated several suggestions to inform the optimization of the next version of SIMPL for EM training programs. Conclusions: The SIMPL app, originally developed for use in surgical training programs, can be implemented in EM residency programs, has positive support from faculty, and may provide important adjunct information beyond current ESEs.

7.
AEM Educ Train ; 7(1): e10839, 2023 Feb.
Article En | MEDLINE | ID: mdl-36711254

Background: Didactics play a key role in medical education, yet there is no standardized didactic evaluation tool to assess quality and provide feedback to instructors. Cognitive load theory provides a framework for lecture evaluations. We sought to develop an evaluation tool, rooted in cognitive load theory, to assess the quality of didactic lectures. Methods: We used a modified Delphi method to achieve expert consensus on items for a lecture evaluation tool. Nine emergency medicine educators with expertise in cognitive load participated in three modified Delphi rounds. In the first two rounds, experts rated the importance of including each item in the evaluation rubric on a 1 to 9 Likert scale, with 1 labeled as "not at all important" and 9 labeled as "extremely important." In the third round, experts were asked to make a binary choice of whether each item should be included in the final evaluation tool. In each round, the experts were invited to provide written comments, edits, and suggested additional items. Modifications were made between rounds based on item scores and expert feedback. We calculated descriptive statistics for item scores. Results: We completed three Delphi rounds, each with a 100% response rate. After Round 1, we removed one item, made major changes to two items, made minor wording changes to nine items, and modified the scale of one item. Following Round 2, we eliminated three items, made major wording changes to one item, and made minor wording changes to one item. After the third round, we made minor wording changes to two items. We also reordered and categorized items for ease of use. The final evaluation tool consisted of nine items. Conclusions: We developed a lecture assessment tool rooted in cognitive load theory and specific to medical education. This tool can be applied to assess the quality of instruction and provide important feedback to speakers.
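As a rough illustration of how item ratings from such a modified Delphi round might be summarized, the sketch below computes each item's median rating and the share of experts rating it 7 to 9; the item names and retention thresholds are hypothetical, not the study's actual criteria:

```python
import statistics

# Hypothetical ratings: 9 experts, 1-9 Likert scale, three candidate items
ratings = {
    "clear learning objectives": [8, 9, 7, 8, 9, 8, 7, 9, 8],
    "slide text density":        [6, 7, 5, 8, 6, 7, 6, 5, 7],
    "use of humor":              [3, 5, 4, 2, 6, 4, 3, 5, 4],
}

for item, scores in ratings.items():
    median = statistics.median(scores)
    pct_high = sum(s >= 7 for s in scores) / len(scores)
    # Hypothetical retention rule: median >= 7 and >= 75% of experts rate 7-9
    verdict = "retain" if median >= 7 and pct_high >= 0.75 else "revise or drop"
    print(f"{item}: median {median}, {pct_high:.0%} rated 7-9 -> {verdict}")
```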

8.
Med Teach ; 44(12): 1313-1331, 2022 12.
Article En | MEDLINE | ID: mdl-36369939

BACKGROUND: The COVID-19 pandemic caused graduate medical education (GME) programs to pivot to virtual interviews (VIs) for recruitment and selection. This systematic review synthesizes the rapidly expanding evidence base on VIs, providing insights into preferred formats, strengths, and weaknesses. METHODS: PubMed/MEDLINE, Scopus, ERIC, PsycINFO, MedEdPublish, and Google Scholar were searched from 1 January 2012 to 21 February 2022. Two authors independently screened titles, abstracts, full texts, performed data extraction, and assessed risk of bias using the Medical Education Research Quality Instrument. Findings were reported according to Best Evidence in Medical Education guidance. RESULTS: One hundred ten studies were included. The majority (97%) were from North America. Fourteen were conducted before COVID-19 and 96 during the pandemic. Studies involved both medical students applying to residencies (61%) and residents applying to fellowships (39%). Surgical specialties were more represented than other specialties. Applicants preferred VI days that lasted 4-6 h, with three to five individual interviews (15-20 min each), with virtual tours and opportunities to connect with current faculty and trainees. Satisfaction with VIs was high, though both applicants and programs found VIs inferior to in-person interviews for assessing 'fit.' Confidence in ranking applicants and programs was decreased. Stakeholders universally noted significant cost and time savings with VIs, as well as equity gains and reduced carbon footprint due to eliminating travel. CONCLUSIONS: The use of VIs for GME recruitment and selection has accelerated rapidly. The findings of this review offer early insights that can guide future practice, policy, and research.


COVID-19; Education, Medical; Internship and Residency; Humans; Pandemics; COVID-19/epidemiology; Education, Medical, Graduate; Fellowships and Scholarships
9.
AEM Educ Train ; 6(1): e10718, 2022 Feb.
Article En | MEDLINE | ID: mdl-35112038

BACKGROUND: COVID-19 necessitated the shift to virtual resident instruction. The challenge of learning via virtual modalities has the potential to increase cognitive load. It is important for educators to reduce cognitive load to optimize learning, yet there are few available tools to measure it. The objective of this study was to identify, and provide validity evidence for, an instrument to evaluate cognitive load in virtual emergency medicine didactic sessions, following Messick's framework. METHODS: This study followed Messick's framework for validity, including content, response process, internal structure, and relationship to other variables. Content validity evidence included: (1) engagement of a reference librarian and a literature review of existing instruments; and (2) engagement of experts in cognitive load and relevant stakeholders to review the literature and choose an instrument appropriate for measuring cognitive load in EM didactic presentations. Response process validity evidence was gathered by using the format and anchors of instruments with previous validity evidence and by piloting among the author group. A lecture was delivered by one faculty member to four residency programs via Zoom. Afterward, residents completed the cognitive load instrument. Descriptive statistics were collected; Cronbach's alpha assessed internal consistency of the instrument; and correlation assessed the relationship to other variables (lecture quality). RESULTS: The 10-item Leppink Cognitive Load instrument was selected with attention to content and response process validity evidence. Internal structure of the instrument was good (Cronbach's alpha = 0.80). Subscales performed well: intrinsic load (α = 0.96, excellent), extraneous load (α = 0.89, good), and germane load (α = 0.97, excellent). Five of the items were correlated with overall lecture quality (p < 0.05). CONCLUSIONS: The 10-item Cognitive Load instrument demonstrated good validity evidence for measuring cognitive load and the subdomains of intrinsic, extraneous, and germane load. This instrument can be used to provide feedback to presenters to improve the cognitive load of their presentations.
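For readers unfamiliar with the internal-consistency statistic reported here, this is a minimal sketch of Cronbach's alpha for a 10-item instrument using the standard item-variance formula; the responses are synthetic, and this is not the authors' analysis code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n_respondents, n_items) array."""
    items = np.asarray(items, float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Synthetic Likert responses: 60 residents x 10 cognitive load items,
# driven by a shared latent "load" factor so that items correlate
rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (60, 1))
responses = np.clip(np.round(5 + 2 * latent + rng.normal(0, 1, (60, 10))), 1, 9)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```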

10.
West J Emerg Med ; 23(1): 103-107, 2022 01 03.
Article En | MEDLINE | ID: mdl-35060873

INTRODUCTION: Residency didactic conferences transitioned to a virtual format during the COVID-19 pandemic. This format raises questions about effective educational practices, which depend on learner engagement. In this study we sought to characterize the competing demands for learner attention during virtual didactics and to pilot a methodology for future studies. METHODS: This was a prospective, observational, cohort study of attendees at virtual didactics from a single emergency medicine residency, which employed a self-report strategy informed by validated classroom assessments of student engagement. We deployed an online, two-question survey across six conference days using random signaled sampling. Participants reported all activities engaged in during the preceding five minutes. RESULTS: There were 1303 responses over 40 survey deployments across six nonadjacent days. Respondents were residents (63.4%), faculty (27.5%), fellows (2.3%), students (2.0%), and others (4.8%). Across all responses, about 85% indicated engagement with the virtual conference during the five minutes preceding the poll. The average number of concurrent activities was 2.0 (standard deviation = 1.1). Additional activities included education-related (34.2%), work-related (21.1%), social (18.8%), personal (14.6%), self-care (13.4%), and entertainment (4.4%) activities. CONCLUSION: Learners engage in a variety of activities during virtual didactics. Engagement appears to fluctuate temporally, which may inform teaching strategies. This information may also provide unique instructor feedback. This pilot study demonstrates a methodology for future studies of conference engagement and learning outcomes.


COVID-19; Emergency Medicine; Cohort Studies; Humans; Pandemics; Pilot Projects; Prospective Studies; SARS-CoV-2
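A sketch of how the random signaled sampling described above might be scheduled, with assumed parameters (the conference hours, number of signals, and minimum spacing are hypothetical; the study's actual signal-timing rules are not given in the abstract):

```python
import random
from datetime import datetime, timedelta

def schedule_signals(start, end, n_signals, min_gap_min=10, seed=0):
    """Draw n_signals random poll times in [start, end], spaced >= min_gap_min apart."""
    rng = random.Random(seed)
    total_min = int((end - start).total_seconds() // 60)
    while True:  # rejection-sample until the spacing constraint holds
        offsets = sorted(rng.sample(range(total_min), n_signals))
        if all(b - a >= min_gap_min for a, b in zip(offsets, offsets[1:])):
            return [start + timedelta(minutes=m) for m in offsets]

# Hypothetical 3-hour conference block with 7 survey deployments
conf_start = datetime(2021, 1, 6, 9, 0)
for t in schedule_signals(conf_start, conf_start + timedelta(hours=3), n_signals=7):
    print(t.strftime("%H:%M"), "-> deploy the 2-question engagement poll")
```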
11.
Acad Med ; 96(9): 1276-1281, 2021 09 01.
Article En | MEDLINE | ID: mdl-34432665

The clinical learning environment (CLE) encompasses the learner's personal characteristics and experiences, social relationships, organizational culture, and the institution's physical and virtual infrastructure. During the COVID-19 pandemic, all 4 of these parts of the CLE have undergone massive and rapid disruption. Personal and social communications have been limited to virtual interactions or shifted to unfamiliar clinical spaces because of redeployment. Rapid changes to the organizational culture required prompt adaptations from learners and educators within their complex organizational systems, yet caused increased confusion and anxiety. A traditional reliance on physical infrastructure for classical educational practices in the CLE was challenged when all institutions had to undergo a major transition to a virtual learning environment. However, these disruptions spurred exciting innovations in the CLE. An entire cohort of physicians and learners underwent swift adjustments in their personal and professional development and identity as they rose to meet the clinical and educational challenges they faced due to COVID-19. Social networks and collaborations were expanded beyond traditional institutional walls and previously held international boundaries within multiple specialties. Specific aspects of the organizational and educational culture, including epidemiology, public health, and medical ethics, were brought to the forefront in health professions education, while the physical learning environment underwent a rapid transition to a virtual learning space. As health professions education continues in the era of COVID-19 and into a new era, educators must take advantage of these dynamic systems to identify additional gaps and implement meaningful change. In this article, health professions educators and learners from multiple institutions and specialties discuss the gaps and weaknesses exposed, the opportunities revealed, and the strategies developed for optimizing the CLE in the post-COVID-19 world.


COVID-19/prevention & control; Education, Distance/methods; Education, Medical/methods; Learning; Physical Distancing; Students, Medical/psychology; Cooperative Behavior; Education, Distance/organization & administration; Education, Medical/organization & administration; Humans; Interdisciplinary Placement; Organizational Culture; Social Environment; Social Networking; United States
13.
AEM Educ Train ; 5(3): e10628, 2021 Jul.
Article En | MEDLINE | ID: mdl-34222757

BACKGROUND: Educational autopsy (EA) is an innovative technique designed to improve the quality of feedback provided to conference presenters. In response to survey fatigue and suboptimal feedback from online evaluations, this postlecture group debrief was adapted to emergency medicine residency didactics, with a goal of collecting timely, specific, and balanced feedback for presenters. Other aims include encouraging participants to think critically about educational methods and providing presenters with formal feedback for a portfolio or promotion packet. It was hypothesized that EA provides more specific and actionable feedback than traditional online evaluations deployed individually to conference attendees. METHODS: The authors analyzed 4 months of evaluations pre- and postimplementation of EA. Rate of completion, presence of comments, and types of comments were compared. Comments were coded as specific, nonspecific, and unrelated/unclear. Specific comments were further categorized as about audiovisual presentation design, speaker presentation style, and educational methods of the session. RESULTS: A total of 46 of 65 (71%) preimplementation presentations eligible for evaluation received comments through traditional online evaluations. A total of 44 of 75 (59%) eligible postimplementation presentations generated comments via EA. Among presentations that received comments, none received nonspecific comments via EA, compared to 46% of lectures through traditional evaluations. EA generated specific comments for more presentations regarding presentation design (91% vs. 63%), presentation style (66% vs. 24%), and educational methods (48% vs. 28%). EA produced no unclear comments; traditional evaluations resulted in unclear comments for 15% of lectures. CONCLUSIONS: EA generated more specific feedback for residency conference presenters, although there were a number of sessions not evaluated by EA. Although this limited analysis suggested that EA produced higher-quality presenter feedback, it also showed a drop-off in the proportion of didactic sessions that received narrative feedback.

20.
AEM Educ Train ; 4(4): 435-437, 2020 Oct.
Article En | MEDLINE | ID: mdl-33150290

BACKGROUND: The emergency department environment requires the clinician-educator to use adaptive teaching strategies to balance education with efficiency and patient care. Recently, alternative approaches to the traditional serial trainee-attending patient evaluation model have emerged in the literature. METHODS: The parallel encounter involves the attending physician and resident seeing the patient independently. Rather than the trainee delivering a traditional oral case presentation of the history and examination to the attending physician, the attending and trainee come together following their independent evaluations to jointly discuss and formulate the assessment and plan. RESULTS: The parallel encounter has the potential to enhance the teaching encounter by emphasizing clinical reasoning, reduce cognitive bias by integrating two independent assessments of the same patient, increase attending workflow flexibility and efficiency, and improve patient satisfaction and outcomes by reducing time to initial provider contact. The attending must be mindful of protecting resident autonomy, and the model tends to work better for more senior learners. CONCLUSIONS: The parallel encounter represents a novel alternative to the traditional serial trainee-attending patient evaluation model that may enhance the teaching encounter and improve patient care.

...